GUPFS: The Global Unified Parallel File System Project at NERSC

Authors

  • Gregory Butler
  • Rei Lee
  • Mike Welcome
Abstract

The Global Unified Parallel File System (GUPFS) project is a five-year project to provide a scalable, high-performance, high-bandwidth, shared file system for the National Energy Research Scientific Computing Center (NERSC). This paper presents the GUPFS testbed configuration, our benchmarking methodology, and some preliminary results.


Related articles

GPFS on a Cray XT

The NERSC Global File System (NGF) is a center-wide production file system at NERSC based on IBM’s GPFS. In this paper we will give an overview of GPFS and the NGF architecture. This will include a comparison of features and capabilities between GPFS and Lustre. We will discuss integrating GPFS with a Cray XT system. This configuration relies heavily on Cray DVS. We will describe DVS and discus...


Tuning HDF5 subfiling performance on parallel file systems

Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable ...
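The sketch below is only a minimal illustration of the subfiling idea summarized above, not the HDF5 mechanism described in the paper: MPI ranks are split into groups, and each group writes collectively to its own subfile, landing between the single-shared-file and file-per-process extremes. The group size, file names, and block size are assumptions made up for the example.

    /* Illustrative sketch of manual "subfiling" with MPI-IO (not the HDF5
     * implementation described in the paper). Ranks are split into groups,
     * and each group shares one subfile, so the number of files falls
     * between one shared file and one file per process.
     * Group size, file names, and block size are assumptions for the example. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define RANKS_PER_SUBFILE 8          /* assumed grouping factor */
    #define BLOCK_BYTES       1048576    /* assumed per-rank block size */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Split the world communicator; each group maps to one subfile. */
        int group = rank / RANKS_PER_SUBFILE;
        MPI_Comm subcomm;
        MPI_Comm_split(MPI_COMM_WORLD, group, rank, &subcomm);

        int subrank;
        MPI_Comm_rank(subcomm, &subrank);

        char fname[64];
        snprintf(fname, sizeof(fname), "output.subfile.%d", group);

        static char buf[BLOCK_BYTES];
        memset(buf, 'a' + (rank % 26), sizeof(buf));

        /* Collective write: ranks in the same group land at disjoint offsets
         * in their shared subfile, letting MPI-IO aggregate the requests. */
        MPI_File fh;
        MPI_File_open(subcomm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, (MPI_Offset)subrank * BLOCK_BYTES,
                              buf, BLOCK_BYTES, MPI_CHAR, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper such as mpicc and run across many ranks, varying RANKS_PER_SUBFILE moves the workload between the two extremes the abstract describes; a production subfiling layer additionally handles aggregation settings and reassembling the subfiles into one logical dataset.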


Project Report for Project Shared Parallel File System

In recent years, shared parallel file systems have become a new focus area for high-throughput computing. Many systems have been developed for this purpose, including PVFS2 and GFS; these systems were studied in my project. In this project, such a shared parallel file system was built by installing it on four nodes of the LISA cluster at SARA (Stichting Academisch...


Deploying Server-side File System Monitoring at NERSC

The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleane...


MPI-I/O on Franklin XT4 System at NERSC

Prior to a software upgrade and hardware maintenance on March 17th 2009 on the Franklin Cray XT4 machine at the National Energy Research Scientific Computing (NERSC) Center, MPI-IO shared-file performance reached only a small percentage of file-per-processor POSIX performance. The March 17th upgrade unintentionally increased I/O performance significantly for a number of applications. Thi...
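As a rough illustration of the two access patterns being compared (shared-file MPI-IO versus file-per-process POSIX I/O), the sketch below writes the same per-rank block both ways. The file names and block size are invented for the example; this is not the NERSC benchmark itself.

    /* Illustrative only: the two I/O patterns contrasted in the abstract.
     * (a) every rank writes into one shared file via collective MPI-IO;
     * (b) every rank writes its own file via POSIX open/write.
     * Block size and file names are assumptions made for this sketch. */
    #include <mpi.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_BYTES (4 * 1024 * 1024)   /* assumed per-rank block size */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char *buf = malloc(BLOCK_BYTES);
        memset(buf, rank % 256, BLOCK_BYTES);

        /* (a) Shared file: one file, collective write at disjoint offsets. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, (MPI_Offset)rank * BLOCK_BYTES,
                              buf, BLOCK_BYTES, MPI_BYTE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        /* (b) File per process: independent POSIX writes, no MPI-IO layer. */
        char fname[64];
        snprintf(fname, sizeof(fname), "fpp.%06d.dat", rank);
        int fd = open(fname, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd >= 0) {
            ssize_t n = write(fd, buf, BLOCK_BYTES);
            (void)n;   /* a real benchmark would check and time this */
            close(fd);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }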



Publication year: 2004